    A Probabilistic One-Step Approach to the Optimal Product Line Design Problem Using Conjoint and Cost Data

    Designing and pricing new products is one of the most critical activities for a firm, and it is well known that taking consumer preferences into account in design decisions is essential for products to succeed later in a competitive environment (e.g., Urban and Hauser 1993). Consequently, measuring consumer preferences among multiattribute alternatives has been a primary concern in marketing research, and among the many methodologies developed, conjoint analysis (Green and Rao 1971) has turned out to be one of the most widely used preference-based techniques for identifying and evaluating new product concepts. Moreover, a number of conjoint-based models with a special focus on mathematical programming techniques for optimal product (line) design have been proposed (e.g., Zufryden 1977, 1982, Green and Krieger 1985, 1987b, 1992, Kohli and Krishnamurti 1987, Kohli and Sukumar 1990, Dobson and Kalish 1988, 1993, Balakrishnan and Jacob 1996, Chen and Hausman 2000). These models are directed at determining optimal product concepts using consumers' idiosyncratic or segment-level part-worth preference functions estimated previously within a conjoint framework. Recently, Balakrishnan and Jacob (1996) proposed the use of Genetic Algorithms (GA) to solve the problem of identifying a share-maximizing single product design using conjoint data.

    In this paper, we follow Balakrishnan and Jacob's idea and employ and evaluate the GA approach for the problem of optimal product line design. Similar to the approaches of Kohli and Sukumar (1990) and Nair et al. (1995), product lines are constructed directly from part-worth data obtained by conjoint analysis, which can be characterized as a one-step approach to product line design. In contrast, a two-step approach would first reduce the total set of feasible product profiles to a smaller set of promising items (a reference set of candidate items) from which the products that constitute a product line are selected in a second step. Two-step approaches, or partial models for either the first or second stage in this context, have been proposed by Green and Krieger (1985, 1987a, 1987b, 1989), McBride and Zufryden (1988), Dobson and Kalish (1988, 1993) and, more recently, by Chen and Hausman (2000). Heretofore, with the sole exception of Chen and Hausman's (2000) probabilistic model, all contributors to the literature on conjoint-based product line design have employed a deterministic, first-choice model of idiosyncratic preferences. Accordingly, a consumer is assumed to choose from her/his choice set the product with maximum perceived utility with certainty. However, the first-choice rule seems too rigid an assumption for many product categories and individual choice situations, as the analyst often will not be in a position to control for all relevant variables influencing consumer behavior (e.g., situational factors). Therefore, in agreement with Chen and Hausman (2000), we incorporate a probabilistic choice rule to provide a more flexible representation of the consumer decision-making process and start from segment-specific conjoint models of the conditional multinomial logit type. Favoring the multinomial logit model does not imply rejection of the widespread max-utility rule, as the MNL includes the option of mimicking this first-choice rule. We further consider profit as a firm's economic criterion to evaluate decisions and introduce fixed and variable costs for each product profile. However, the proposed methodology is flexible enough to accommodate other goals such as market share (as well as any other probabilistic choice rule). This model flexibility is provided by the implemented Genetic Algorithm as the underlying solver for the resulting nonlinear integer programming problem. Genetic Algorithms merely use objective function information (in the present context, on expected profits of feasible product line solutions) and are easily adjustable to different objectives without the need for major algorithmic modifications.

    To assess the performance of the GA methodology for the product line design problem, we employ sensitivity analysis and Monte Carlo simulation. Sensitivity analysis is carried out to study the performance of the Genetic Algorithm with respect to varying GA parameter values (population size, crossover probability, mutation rate) and to fine-tune these values in order to provide near-optimal solutions. Based on more than 1,500 sensitivity runs applied to different problem sizes ranging from 12,650 to 10,586,800 feasible product line candidate solutions, we can recommend: (a) as expected, that a larger problem size be accompanied by a larger population size, with a minimum population size of 130 for small problems and a minimum of 250 for large problems, (b) a crossover probability of at least 0.9, and (c) an unexpectedly high mutation rate of 0.05 for small and medium-sized problems and a mutation rate on the order of 0.01 for large problem sizes.

    Following the results of the sensitivity analysis, we evaluated the GA performance for a large set of systematically varied market scenarios and associated problem sizes. We generated problems using a 4-factorial experimental design that varied the number of attributes, the number of levels per attribute, the number of items to be introduced by a new seller, and the number of competing firms other than the new seller. The results of the Monte Carlo study, with a total of 276 data sets analyzed, show that the GA works efficiently both in providing near-optimal product line solutions and in terms of CPU time. In particular, (a) the worst-case performance ratio of the GA observed in a single run was 96.66%, indicating that the profit of the best product line solution found by the GA was never less than 96.66% of the profit of the optimal product line, (b) the hit ratio of identifying the optimal solution was 84.78% (234 out of 276 cases), and (c) the GA took at most 30 seconds to converge. The option of running Genetic Algorithms repeatedly with (slightly) changed parameter settings and/or different initial populations (as opposed to many other heuristics) further improves the chances of finding the optimal solution.
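
    The sketch below illustrates the kind of GA loop described above, not the authors' exact implementation. The part-worths, segment sizes, margins, fixed costs, and status-quo utilities are all made-up illustrative assumptions; fitness is expected profit under segment-level MNL choice shares, and the parameter defaults follow the sensitivity-analysis recommendations (population size 130, crossover probability 0.9, mutation rate 0.05).

```python
import math
import random

# Hypothetical problem dimensions and data (all assumptions for illustration)
N_ATTRS, N_LEVELS, LINE_LEN, N_SEGMENTS = 4, 3, 2, 3
random.seed(1)
partworth = [[[random.gauss(0, 1) for _ in range(N_LEVELS)]
              for _ in range(N_ATTRS)] for _ in range(N_SEGMENTS)]
seg_size = [1000, 800, 1200]          # consumers per segment (assumed)
FIXED_COST = 500.0                     # fixed cost per introduced item (assumed)
STATUS_QUO_UTIL = [0.5] * N_SEGMENTS   # utility of competing offers (assumed)

def margin(prod):
    return 5.0 + sum(prod)             # variable margin per unit (assumed)

def utility(s, prod):
    return sum(partworth[s][a][prod[a]] for a in range(N_ATTRS))

def expected_profit(line):
    """MNL choice shares per segment -> expected contribution minus fixed costs."""
    profit = -FIXED_COST * len(line)
    for s in range(N_SEGMENTS):
        expu = [math.exp(utility(s, p)) for p in line]
        denom = sum(expu) + math.exp(STATUS_QUO_UTIL[s])
        for p, e in zip(line, expu):
            profit += seg_size[s] * (e / denom) * margin(p)
    return profit

def random_line():
    return [tuple(random.randrange(N_LEVELS) for _ in range(N_ATTRS))
            for _ in range(LINE_LEN)]

def crossover(a, b):
    # Swap whole products between two parent lines at a random cut point
    cut = random.randrange(1, LINE_LEN) if LINE_LEN > 1 else 0
    return a[:cut] + b[cut:]

def mutate(line, rate):
    # Flip individual attribute levels with probability `rate`
    return [tuple(random.randrange(N_LEVELS) if random.random() < rate else lv
                  for lv in prod) for prod in line]

def ga(popsize=130, p_cross=0.9, p_mut=0.05, generations=200):
    pop = [random_line() for _ in range(popsize)]
    for _ in range(generations):
        pop.sort(key=expected_profit, reverse=True)
        elite = pop[:popsize // 2]             # truncation selection
        children = []
        while len(children) < popsize - len(elite):
            a, b = random.sample(elite, 2)
            child = crossover(a, b) if random.random() < p_cross else list(a)
            children.append(mutate(child, p_mut))
        pop = elite + children
    return max(pop, key=expected_profit)

best = ga()
print(best, round(expected_profit(best), 2))
```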

    Variable Selection for Market Basket Analysis

    Keywords: market basket analysis; cross-category effects; variable selection; multivariate logit model; pseudo-likelihood estimation

    Endogeneity of marketing variables in multicategory choice models

    A regressor is endogenous if it is correlated with the unobserved residual of a model. Ignoring endogeneity may lead to biased coefficients. We deal with the omitted variable bias that arises if firms set marketing variables considering factors (demand shocks) that researchers do not observe. Whereas publications on sales response or brand choice models frequently take the potential endogeneity of marketing variables into account, multicategory choice models provide a different picture. To consider endogeneity in multicategory choice models, we follow a two-step Gaussian copula approach. The first step corresponds to an individual-level random coefficient version of the multivariate logit model. We analyze yearly shopping data for one specific grocery store, covering 29 product categories. If the assumption of a Gaussian correlation structure is met, the copula approach indicates the endogeneity of a category-specific marketing variable in about 31% of the categories. The majority of marketing variables rated as endogenous are positively correlated with the omitted variable, implying that ignoring endogeneity leads to an overestimation of the coefficients of the respective marketing variable. Finally, we investigate whether taking endogeneity into account via the copula approach leads to different managerial implications. In this regard, we demonstrate that for our data, ignoring endogeneity often suggests a level of marketing activity that is too high.
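
    A minimal sketch of the two-step Gaussian copula idea may help fix intuition. For readability it uses a linear outcome equation as a stand-in for the multivariate logit second step; the data-generating process, sample size, and coefficients are illustrative assumptions. The key device is the copula term Phi^-1(F_hat(x)) of the endogenous regressor, added as an extra regressor (identification requires x to be non-normally distributed).

```python
import numpy as np
from scipy.stats import norm, rankdata

def copula_term(x):
    """Nonparametric CDF via ranks, then inverse-normal transform."""
    u = rankdata(x) / (len(x) + 1)     # empirical CDF scaled into (0, 1)
    return norm.ppf(u)

rng = np.random.default_rng(0)
e = rng.normal(size=500)                       # unobserved demand shock
# Endogenous, non-normally distributed regressor (non-normality is
# required for identification of the copula correction)
x = 0.8 * e + rng.exponential(scale=1.0, size=500)
y = 1.0 + 2.0 * x + e + rng.normal(size=500)   # outcome with true slope 2

# Regression with and without the copula term (stand-in for the logit step)
X_naive = np.column_stack([np.ones_like(x), x])
X_cop = np.column_stack([np.ones_like(x), x, copula_term(x)])
beta_naive = np.linalg.lstsq(X_naive, y, rcond=None)[0]
beta_cop = np.linalg.lstsq(X_cop, y, rcond=None)[0]
print("naive slope:", round(beta_naive[1], 3))            # biased upward
print("copula-corrected slope:", round(beta_cop[1], 3))   # close to 2
```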

    Relevance of dynamic variables in multicategory choice models

    We investigate the relevance of dynamic variables that reflect the purchase history of a household as independent variables in multicategory choice models. To this end, we estimate both homogeneous and finite mixture variants of the multivariate logit model. We consider two types of dynamic variables. Variables of the first type, which previous publications on multicategory choice models have ignored, are exponentially smoothed category purchases, which we simply call category loyalties. Variables of the second type are log-transformed times since the last purchase of any category. Our results clearly show that adding dynamic variables improves statistical model performance, with category loyalties being more important than log-transformed times. The majority of coefficients of marketing variables (features, displays, and price reductions), pairwise category interactions, and cross-category relations differ between models that include and models that exclude dynamic variables. We also measure the effect of marketing variables on purchase probabilities of the same category (own effects) and on purchase probabilities of other categories (cross effects). This exercise demonstrates that the model without dynamic variables tends to overestimate own effects of marketing variables in many product categories. This positive omitted variable bias provides another explanation for the well-known problem of “overpromotion” in retailing.
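
    As an illustration, the two dynamic variables can be computed from a single household's purchase history for one category as below. The smoothing constant, the interpretation of the time variable as time since that category's own last purchase, and the fallback before the first purchase are assumptions, not choices documented in the abstract.

```python
import math

def dynamic_variables(purchases, alpha=0.3):
    """purchases: 0/1 category purchase indicators, one per shopping trip.
    Returns per-trip (loyalty, log time since last purchase) pairs,
    each computed from history strictly before the current trip."""
    loyalty, last_buy = 0.0, None
    out = []
    for t, bought in enumerate(purchases):
        # Log-transformed time since the category's last purchase;
        # before any purchase we fall back to t + 1 (an assumption)
        gap = (t - last_buy) if last_buy is not None else t + 1
        out.append((round(loyalty, 3), round(math.log(gap), 3)))
        # Exponentially smoothed purchase history ("category loyalty")
        loyalty = alpha * bought + (1 - alpha) * loyalty
        if bought:
            last_buy = t
    return out

print(dynamic_variables([0, 1, 0, 0, 1, 1]))
```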

    Semi-parametrische Marktanteilsmodellierung (Semi-parametric Market Share Modeling)

    In the present empirical study, market share models with semi-parametric additive brand attractions achieve better fit measures, both according to an information criterion such as AIC, which penalizes a model for the number of degrees of freedom consumed, and according to error measures determined by cross-validation or bootstrapping. The greater flexibility compared to strictly parametric models leads to a more reliable measurement of the effects of marketing instruments. Moreover, marginal effects and price elasticities computed on the basis of the semi-parametric model differ qualitatively from those obtained for the parametric alternatives. The more flexible market share model implies different optimal decisions associated with profit increases, as prices and profits determined by means of the Fictitious Play solution concept show. (Author's abstract) Series: Report Series SFB "Adaptive Information Systems and Modelling in Economics and Management Science"
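
    To make the model class concrete, the sketch below computes MNL-type market shares from additive brand attractions in which a piecewise-linear price term stands in for the smooth (e.g., spline-based) components of the semi-parametric model; all knots, fitted values, and coefficients are illustrative assumptions.

```python
import numpy as np

# Hypothetical flexible price effect, tabulated at knots as a stand-in
# for a fitted smooth term
price_knots = np.array([1.0, 1.5, 2.0, 2.5, 3.0])
price_effect = np.array([1.2, 0.9, 0.3, -0.5, -1.6])

def attraction(price, promo, beta_promo=0.4):
    """Additive attraction: flexible price term + parametric promotion term."""
    f_price = np.interp(price, price_knots, price_effect)
    return np.exp(f_price + beta_promo * promo)

def market_shares(prices, promos):
    # Attraction model: share_i = attraction_i / sum of all attractions
    a = np.array([attraction(p, d) for p, d in zip(prices, promos)])
    return a / a.sum()

print(market_shares([1.8, 2.2, 2.0], [1, 0, 0]).round(3))
```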

    Hidden Variable Models for Market Basket Data. Statistical Performance and Managerial Implications

    We compare the performance of several hidden variable models, namely binary factor analysis, topic models (latent Dirichlet allocation, the correlated topic model), the restricted Boltzmann machine, and the deep belief net. We briefly present these models and outline their estimation. Performance is measured by log likelihood values of these models for a holdout data set of market baskets. For each model we estimate and evaluate variants with increasing numbers of hidden variables. Binary factor analysis vastly outperforms topic models. The restricted Boltzmann machine and the deep belief net, in turn, attain a similar performance advantage over binary factor analysis. For each model we interpret the relationships between the most important hidden variables and observed category purchases. To demonstrate managerial implications, we compute the relative basket size increase due to promoting each category for the better performing models. Recommendations based on the restricted Boltzmann machine and the deep belief net not only carry lower uncertainty owing to their statistical performance, they also have more managerial appeal than those derived from binary factor analysis. The impressive performance of the restricted Boltzmann machine and the deep belief net suggests continuing this research by extending these models, e.g., by including marketing variables as predictors.
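
    As an illustration of fitting one of these hidden variable models to binary basket data, the sketch below trains scikit-learn's BernoulliRBM on a simulated basket matrix; the data, the number of hidden units, and the training settings are illustrative assumptions rather than the study's actual configuration.

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM

rng = np.random.default_rng(0)
# Hypothetical baskets: rows = shopping trips, columns = 29 categories (0/1)
baskets = (rng.random((1000, 29)) < 0.15).astype(float)

rbm = BernoulliRBM(n_components=10, learning_rate=0.05, n_iter=50,
                   random_state=0)
rbm.fit(baskets)

# Hidden-unit activations for a few baskets: which latent "purchase
# themes" each trip loads on
h = rbm.transform(baskets[:5])
print(h.round(2))

# components_ links hidden units to categories; list the categories
# most strongly associated with each hidden unit
top = np.argsort(-rbm.components_, axis=1)[:, :3]
print("top categories per hidden unit:", top)
```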

    Heuristic pricing rules not requiring knowledge of the price response function

    Heuristic rules are appropriate if a decision maker wants to set the price of a new product, or of a product whose past price variation is low, and budget limitations prevent the use of marketing experiments or customer surveys. Whereas such rules are not guaranteed to provide the optimal price, generated profits should be as close as possible to their optimal values. We investigate eleven pricing rules that do not require the decision maker to know the price response function and its parameters. We consider monopolistic market situations, in which sales depend on the price of the respective product only. A Monte Carlo simulation that is more comprehensive than extant attempts found in the literature serves to evaluate these rules. The best performing rules either hold price changes between periods constant or make them dependent on the previous absolute price difference. These rules also outperform purely random price setting, which we use as a benchmark. On the other hand, rules based on arc elasticities or on a log-linear approximation to sales and prices turn out to be even worse than random price setting. In the conclusion, we discuss how heuristic pricing rules may be extended to deal with product line pricing, additional marketing variables (e.g., advertising, sales promotion, and sales force), and a duopolistic market situation.
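
    One plausible reading of the best-performing rule family is a profit hill climber whose step is tied to the previous absolute price change: keep moving the price in the same direction while profit rises, reverse otherwise. The sketch below implements this interpretation; the monopolistic response function is an illustrative assumption that the rule itself never inspects.

```python
import random
random.seed(0)

def profit(price):
    """Hypothetical unknown response: linear sales with noise, unit cost 1."""
    sales = max(0.0, 100 - 20 * price + random.gauss(0, 2))
    return (price - 1.0) * sales

def hill_climb_pricing(p0=2.0, step0=0.2, gamma=0.9, periods=30):
    p, step = p0, step0
    prev_profit, direction = profit(p), 1
    path = [(p, prev_profit)]
    for _ in range(periods):
        p = p + direction * step       # move price in current direction
        cur = profit(p)
        if cur < prev_profit:          # profit fell -> reverse direction
            direction = -direction
        step *= gamma                  # step tied to previous |price change|
        prev_profit = cur
        path.append((p, cur))
    return path

# Last few periods; the true optimum of the assumed response is p = 3
for p, pi in hill_climb_pricing()[-3:]:
    print(f"price {p:.2f}, profit {pi:.1f}")
```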

    Resource Allocation Heuristics for Unknown Sales Response Functions with Additive Disturbances

    We develop an exploration-exploitation algorithm that solves the allocation of a fixed resource (e.g., a budget, a sales force size, etc.) to several units (e.g., sales districts, customer groups, etc.) with the objective of attaining maximum sales. This algorithm does not require knowledge of the form of the sales response function and is also able to cope with additive random disturbances, which, as a rule, are a component of sales response functions estimated by econometric methods. We compare the algorithm to three rules of thumb that are often used in practice for this allocation problem. The comparison is based on a Monte Carlo simulation with five replications of 192 experimental constellations, which are obtained from four function types, four procedures (i.e., the three rules of thumb and the algorithm), similar/varied elasticities, similar/varied saturations, and three error levels. A statistical analysis of the simulation results shows that the algorithm performs better than the three rules of thumb if the objective is to maximize sales across several periods. We also mention several more general marketing decision problems that could be solved by appropriate modifications of the algorithm presented here.
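
    A heuristic in this spirit can be sketched as follows: repeatedly shift a small budget share between two randomly chosen units, keep shifts that raise noisy observed total sales, and taper the shift size over time. This is an interpretation, not the authors' algorithm; the concave response functions, elasticities, and noise level below are illustrative assumptions hidden from the heuristic, and comparing noisy evaluations against a stored best value is a simplification.

```python
import random
random.seed(1)

K, BUDGET = 4, 100.0
elasticity = [0.3, 0.5, 0.4, 0.6]  # hypothetical, unknown to the heuristic
scale = [10.0, 8.0, 12.0, 6.0]

def sales(alloc):
    """Noisy concave response with an additive disturbance per unit."""
    return sum(s * x ** e + random.gauss(0, 1)
               for s, e, x in zip(scale, elasticity, alloc))

def allocate(periods=200, shift0=5.0, gamma=0.99):
    alloc = [BUDGET / K] * K       # start from the equal-split rule of thumb
    best = sales(alloc)
    shift = shift0
    for _ in range(periods):
        shift *= gamma             # taper exploration over time
        i, j = random.sample(range(K), 2)   # explore: move budget i -> j
        if alloc[i] <= shift:
            continue
        cand = list(alloc)
        cand[i] -= shift
        cand[j] += shift
        cur = sales(cand)
        if cur > best:             # exploit: keep improving shifts
            alloc, best = cand, cur
    return alloc

print([round(x, 1) for x in allocate()])  # total stays at BUDGET
```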